$ kops validate cluster
Using cluster from kubectl context: CONTEXT_NAME

Validating cluster CLUSTER_NAME

INSTANCE GROUPS
NAME                            ROLE            MACHINETYPE     MIN     MAX     SUBNETS
xxxx                            xxxx            xxxxxxxxxxx     xxx     xxx     xxxxxxx

NODE STATUS
NAME    ROLE    READY

VALIDATION ERRORS
KIND    NAME            MESSAGE
dns     apiserver       Validation Failed

The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address.  The API DNS IP address is the placeholder address that kops creates: 203.0.113.123.  Please wait about 5-10 minutes for a control plane node to start, dns-controller to launch, and DNS to propagate.  The protokube container and dns-controller deployment logs may contain more diagnostic information.  Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start.

Validation Failed
Error: validation failed: cluster not yet healthy
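Before applying any of the solutions below, it helps to confirm that dns-controller is actually failing on Route 53 permissions. A minimal check, assuming you have a kubeconfig for the cluster (the deployment name `dns-controller` in `kube-system` is standard for kops):

```shell
# Pull the dns-controller logs and look for Route 53 AccessDenied errors.
# Falls back to a note when no cluster is reachable from this machine.
dns_logs=$(kubectl -n kube-system logs deployment/dns-controller --tail=50 2>/dev/null \
  || echo "kubectl/cluster not reachable from this machine")
echo "$dns_logs"
```

If the logs show `AccessDenied` on `route53:ChangeResourceRecordSets`, SOLUTION_1 or SOLUTION_3 applies; if the controller never started, check the protokube container on the control-plane node instead.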


[SOLUTION_1]
The reason is that the kops controller does not have permission to update the "kops-controller.internal.xxxxxxxxxxx" DNS entry in the AWS Route53 hosted zone.
To fix this, add the following statements to the master IAM role's policy.
*************************************************************************************
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets",
        "route53:GetHostedZone"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:route53:::hostedzone/${hostedzone}"
      ]
    },
    {
      "Action": [
        "route53:GetChange"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:route53:::change/*"
      ]
    },
    {
      "Action": [
        "route53:ListHostedZones"
      ],
      "Effect": "Allow",
      "Resource": [
        "*"
      ]
    }
  ]
}
*************************************************************************************
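One way to apply the policy above is as an inline policy on the control-plane role via the AWS CLI. A sketch, assuming admin credentials; the role name `masters.example.com`, the policy name `kops-route53`, and the zone ID `ZXXXXXXXXXXXXX` are placeholders you must replace:

```shell
# Save the Route 53 policy from above and attach it as an inline policy on the
# control-plane IAM role. Names below are placeholders -- substitute yours.
cat > route53-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets",
        "route53:GetHostedZone"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:route53:::hostedzone/ZXXXXXXXXXXXXX"]
    },
    {
      "Action": ["route53:GetChange"],
      "Effect": "Allow",
      "Resource": ["arn:aws:route53:::change/*"]
    },
    {
      "Action": ["route53:ListHostedZones"],
      "Effect": "Allow",
      "Resource": ["*"]
    }
  ]
}
EOF
python3 -m json.tool route53-policy.json > /dev/null && echo "policy JSON is valid"
# Then, with admin AWS credentials:
#   aws iam put-role-policy --role-name masters.example.com \
#     --policy-name kops-route53 --policy-document file://route53-policy.json
```

After attaching the policy, dns-controller should reconcile the records within a few minutes; re-run `kops validate cluster` to confirm.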


[SOLUTION_2]
Check the CLUSTER_NAME and update the following records in the AWS Route53 hosted zone:
* RECORD_NAME(api.xxxx.xxxx)=VALUE(MASTER_CONTROL-PLANE_PUBLIC-IP-ADDRESS)
* RECORD_NAME(api.internal.xxxx.xxxx)=VALUE(MASTER_CONTROL-PLANE_PRIVATE-IP-ADDRESS)
* RECORD_NAME(kops-controller.internal.xxxx.xxxx)=VALUE(MASTER_CONTROL-PLANE_PRIVATE-IP-ADDRESS)
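The same record edits can be scripted with `aws route53 change-resource-record-sets`. A sketch for the public `api` record; the zone ID, record name, TTL, and IP address are all placeholders for your own values:

```shell
# UPSERT the api A record to point at the control plane, the CLI equivalent
# of editing the record in the Route 53 console. Values are placeholders.
cat > change-batch.json <<'EOF'
{
  "Comment": "Point kops API DNS at the control-plane node",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.example.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "198.51.100.10" }]
      }
    }
  ]
}
EOF
python3 -m json.tool change-batch.json > /dev/null && echo "change batch is valid JSON"
# Then:
#   aws route53 change-resource-record-sets \
#     --hosted-zone-id ZXXXXXXXXXXXXX --change-batch file://change-batch.json
```

Repeat with the private IP for `api.internal.*` and `kops-controller.internal.*`. Note this is a stop-gap: unless the IAM permissions are also fixed, dns-controller cannot keep the records current when nodes are replaced.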

[SOLUTION_3]
Open the master/control-plane IAM role named "masters.xxxx.xxxx" in AWS IAM Roles.
Click "Add permissions" --> "Attach policies", select the "AmazonRoute53xxxxxxxxx" managed policies for this IAM role, and click "Add permissions" again.
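The console clicks above can also be done with the AWS CLI. A sketch, assuming admin credentials and a hypothetical role name; note that the `AmazonRoute53FullAccess` managed policy is much broader than the minimal policy in SOLUTION_1, which is preferable for least privilege:

```shell
# CLI equivalent of attaching a Route 53 managed policy in the console.
ROLE_NAME="masters.example.com"   # hypothetical; substitute your cluster's masters role
POLICY_ARN="arn:aws:iam::aws:policy/AmazonRoute53FullAccess"
aws iam attach-role-policy --role-name "$ROLE_NAME" --policy-arn "$POLICY_ARN" 2>/dev/null \
  || echo "attach failed: run with admin AWS credentials and your real role name"
# Verify afterwards:
#   aws iam list-attached-role-policies --role-name masters.example.com
```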


# LINK :- https://stackoverflow.com/questions/54522497/the-dns-controller-kubernetes-deployment-has-not-updated-the-kubernetes-cluster
